175 research outputs found
Query-guided End-to-End Person Search
Person search has recently gained attention as the novel task of finding a
person, provided as a cropped sample, from a gallery of non-cropped images
where several other people are also visible. We believe that i. person
detection and re-identification should be pursued in a joint optimization
framework and that ii. the person search should leverage the query image
extensively (e.g. emphasizing unique query patterns). However, so far, no prior
art realizes this. We introduce a novel query-guided end-to-end person search
network (QEEPS) to address both aspects. We build on the recent joint
detection and re-identification work OIM [37] and extend it with i. a
query-guided Siamese squeeze-and-excitation network (QSSE-Net) that uses global
context from both the query and gallery images, ii. a query-guided region
proposal network (QRPN) to produce query-relevant proposals, and iii. a
query-guided similarity subnetwork (QSimNet), to learn a query-guided
re-identification score. QEEPS is the first end-to-end query-guided detection
and re-id network. On both the most recent CUHK-SYSU [37] and PRW [46]
datasets, we outperform the previous state-of-the-art by a large margin.
Comment: Accepted as a poster at CVPR 2019
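As an illustration of the query-guidance idea, the following is a minimal PyTorch sketch of a query-guided squeeze-and-excitation block, where the channel attention for the gallery feature map is computed jointly from pooled query and gallery features. The layer sizes and the concatenation-based fusion are our assumptions, not the paper's exact design.

    import torch
    import torch.nn as nn

    class QueryGuidedSE(nn.Module):
        # Channel attention for the gallery feature map, conditioned on the
        # query: the "squeeze" pools both maps globally, the "excitation"
        # MLP turns their concatenation into per-channel weights.
        def __init__(self, channels, reduction=16):
            super().__init__()
            self.pool = nn.AdaptiveAvgPool2d(1)
            self.excite = nn.Sequential(
                nn.Linear(2 * channels, channels // reduction),
                nn.ReLU(inplace=True),
                nn.Linear(channels // reduction, channels),
                nn.Sigmoid(),
            )

        def forward(self, gallery_feat, query_feat):
            # gallery_feat: (B, C, H, W); query_feat: (B, C, h, w)
            b, c = gallery_feat.shape[:2]
            g = self.pool(gallery_feat).view(b, c)
            q = self.pool(query_feat).view(b, c)
            w = self.excite(torch.cat([g, q], dim=1)).view(b, c, 1, 1)
            return gallery_feat * w  # re-weight gallery channels per query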
Transformer Networks for Trajectory Forecasting
Most recent successes in forecasting people's motion are based on LSTM
models, and most recent progress has been achieved by modelling the social
interaction among people and the interaction of people with the scene. We question
the use of the LSTM models and propose the novel use of Transformer Networks
for trajectory forecasting. This is a fundamental switch from the sequential
step-by-step processing of LSTMs to the only-attention-based memory mechanisms
of Transformers. In particular, we consider both the original Transformer
Network (TF) and the larger Bidirectional Transformer (BERT), state-of-the-art
on all natural language processing tasks. Our proposed Transformers predict the
trajectories of the individual people in the scene. These are "simple" models
because each person is modelled separately, without any complex human-human or
human-scene interaction terms. In particular, the TF model without bells and whistles
yields the best score on the largest and most challenging trajectory
forecasting benchmark of TrajNet. Additionally, its extension which predicts
multiple plausible future trajectories performs on par with more engineered
techniques on the 5 datasets of ETH + UCY. Finally, we show that Transformers
may deal with missing observations, as may be the case with real sensor
data. Code is available at https://github.com/FGiuliari/Trajectory-Transformer.
Comment: 18 pages, 3 figures
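To make the switch from sequential LSTM processing to attention concrete, here is a minimal PyTorch sketch (not the authors' released code; all hyper-parameters are illustrative) of a Transformer encoder forecasting the next position of a single pedestrian from its observed trajectory:

    import torch
    import torch.nn as nn

    class TrajectoryTransformer(nn.Module):
        def __init__(self, d_model=64, nhead=4, num_layers=3, obs_len=8):
            super().__init__()
            self.embed = nn.Linear(2, d_model)              # (x, y) -> token
            self.pos = nn.Parameter(torch.randn(obs_len, d_model))
            layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers)
            self.head = nn.Linear(d_model, 2)               # next (x, y)

        def forward(self, traj, pad_mask=None):             # traj: (B, T, 2)
            h = self.embed(traj) + self.pos[: traj.size(1)]
            h = self.encoder(h, src_key_padding_mask=pad_mask)
            return self.head(h[:, -1])                      # one-step forecast

Missing observations, as mentioned above, could be handled by marking the corresponding time steps in pad_mask, so that attention simply skips them.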
Human-centric light sensing and estimation from RGBD images: The invisible light switch
Lighting design in indoor environments is of primary importance for at least
two reasons: 1) people should perceive adequate light; 2) an effective
lighting design means substantial energy savings. We present the Invisible Light
Switch (ILS) to address both aspects. ILS dynamically adjusts the room
illumination level to save energy while keeping the users' perceived light
level constant, so that the energy saving is invisible to them. Our
proposed ILS leverages a radiosity model to estimate the light level which is
perceived by a person within an indoor environment, taking into account the
person's position and viewing frustum (head pose). ILS may therefore dim
those luminaires which are not seen by the user, resulting in effective
energy saving, especially in large open offices (where light may otherwise be
ON everywhere for a single person). To quantify the system performance, we have
collected a new dataset where people wear luxmeter devices while working in
office rooms. The luxmeters measure the amount of light (in lux) reaching the
people's gaze, which we consider a proxy for their perceived illumination level.
Our initial results are promising: in a room with 8 LED luminaires, the energy
consumption in a day may be reduced from 18585 to 6206 watts with ILS
(currently needing 1560 watts for operation). While doing so, the perceived
lighting drops by just 200 lux, a value considered negligible when the
original illumination level is above 1200 lux, as is normally the case in
offices.
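The dimming arithmetic can be sketched in a few lines (a toy stand-in for the paper's radiosity model; the per-luminaire lux contributions below are invented for illustration):

    import numpy as np

    # assumed lux contribution of each of 8 luminaires at the user's gaze
    contrib = np.array([300., 250., 220., 180., 90., 70., 50., 40.])
    # luminaires inside the user's viewing frustum (from head pose)
    visible = np.array([1, 1, 1, 0, 0, 0, 0, 0], dtype=bool)

    dim = np.where(visible, 1.0, 0.1)   # unseen luminaires dimmed to 10%
    perceived = contrib @ dim           # lux reaching the user's gaze
    print(f"{perceived:.0f} lux with ILS vs {contrib.sum():.0f} at full power")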
Class interference regularization
Contrastive losses yield state-of-the-art performance for person re-identification, face verification and few-shot learning. They have recently outperformed the cross-entropy loss on classification at the ImageNet scale and outperformed all prior self-supervision results by a large margin (SimCLR). Simple and effective regularization techniques such as label smoothing and self-distillation no longer apply, because they act on multinomial label distributions, adopted in cross-entropy losses, and not on the tuple comparative terms which characterize contrastive losses.
Here we propose a novel, simple and effective regularization technique, the Class Interference Regularization (CIR), which applies to cross-entropy losses but is especially effective on contrastive losses. CIR perturbs the output features by randomly moving them towards the average embeddings of the negative classes. To the best of our knowledge, CIR is the first regularization technique to act on the output features.
In experimental evaluation, the combination of CIR and a plain Siamese net with triplet loss yields the best few-shot learning performance on the challenging tieredImageNet. CIR also improves the state-of-the-art technique in person re-identification on the Market-1501 dataset, based on triplet loss, and the state-of-the-art technique in person search on the CUHK-SYSU dataset, based on a cross-entropy loss. Finally, on the task of classification, CIR performs on par with the popular label smoothing, as demonstrated for CIFAR-10 and CIFAR-100.
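As we read the abstract, CIR can be sketched in a few lines of PyTorch: each output feature is randomly moved towards the mean embedding of one of its negative classes. The running class means and the strength alpha are assumptions for illustration.

    import torch

    def class_interference(features, labels, class_means, alpha=0.1):
        # features: (B, D) output features; labels: (B,) ground-truth ids;
        # class_means: (C, D) running average embedding of each class.
        num_classes = class_means.size(0)
        # draw, per sample, a random class index different from its label
        neg = torch.randint(0, num_classes - 1, labels.shape,
                            device=labels.device)
        neg = neg + (neg >= labels).long()      # skip the positive class
        # perturb the feature towards the sampled negative class mean
        return features + alpha * (class_means[neg] - features)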
MX-LSTM: mixing tracklets and vislets to jointly forecast trajectories and head poses
Recent approaches on trajectory forecasting use tracklets to predict the
future positions of pedestrians exploiting Long Short Term Memory (LSTM)
architectures. This paper shows that adding vislets, that is, short sequences
of head pose estimations, significantly increases trajectory forecasting
performance. We then propose to use vislets in a novel framework
called MX-LSTM, capturing the interplay between tracklets and vislets thanks to
a joint unconstrained optimization of full covariance matrices during the LSTM
backpropagation. At the same time, MX-LSTM predicts the future head poses,
increasing the standard capabilities of the long-term trajectory forecasting
approaches. With standard head pose estimators and attention-based social
pooling, MX-LSTM sets the new trajectory forecasting state-of-the-art on all
the considered datasets (Zara01, Zara02, UCY, and TownCentre), with a dramatic
margin when pedestrians slow down, a case where most forecasting approaches
struggle to provide an accurate solution.
Comment: 10 pages, 3 figures; to appear in CVPR 2018
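One way to realize the unconstrained optimization of full covariance matrices described above is to let the LSTM emit the raw entries of a Cholesky factor and exponentiate its diagonal, so that Sigma = L L^T is positive definite by construction and backpropagation needs no constraints. A PyTorch sketch for a 4D output (the dimensionality and parameterization are our assumptions):

    import torch

    def full_cov_nll(target, mu, raw_tril):
        # target, mu: (B, 4); raw_tril: (B, 10) raw lower-triangular entries
        B = target.size(0)
        L = torch.zeros(B, 4, 4, device=target.device)
        rows, cols = torch.tril_indices(4, 4)
        L[:, rows, cols] = raw_tril
        diag = torch.arange(4)
        L[:, diag, diag] = torch.exp(L[:, diag, diag])  # positive diagonal
        dist = torch.distributions.MultivariateNormal(mu, scale_tril=L)
        return -dist.log_prob(target).mean()            # negative log-likelihood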
Joint Detection and Tracking in Videos with Identification Features
Recent works have shown that combining object detection and tracking tasks,
in the case of video data, results in higher performance for both tasks, but
they impose a high frame rate as a strict requirement for performance. This
assumption is often violated in real-world applications, where models run on
embedded devices, often at only a few frames per second.
Videos at low frame rates suffer from large object displacements. Here,
re-identification features may help to match largely displaced object
detections, but current joint detection and re-identification formulations
degrade the detector performance, as the two are contrasting tasks. In
real-world applications, having separate detector and re-id models is often not
feasible, as both the memory footprint and the runtime effectively double.
Towards robust long-term tracking applicable to reduced-computational-power
devices, we propose the first joint optimization of detection, tracking and
re-identification features for videos. Notably, our joint optimization
maintains the detector performance, a typical multi-task challenge. At
inference time, we leverage detections for tracking (tracking-by-detection)
when the objects are visible, detectable and slowly moving in the image. We
instead leverage re-identification features to match objects which disappeared
(e.g. due to occlusion) for several frames or were not tracked due to fast
motion (or low frame-rate videos). Our proposed method reaches the
state-of-the-art on MOT; it ranks 1st among online trackers in the
UA-DETRAC'18 tracking challenge, and 3rd overall.
Comment: Accepted at the Image and Vision Computing journal
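The inference-time logic above can be sketched as a greedy two-stage association; the thresholds and the greedy matching are illustrative assumptions, not the paper's exact procedure.

    import numpy as np

    def iou(a, b):
        # intersection-over-union of two (x1, y1, x2, y2) boxes
        x1, y1 = max(a[0], b[0]), max(a[1], b[1])
        x2, y2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
        area_a = (a[2] - a[0]) * (a[3] - a[1])
        area_b = (b[2] - b[0]) * (b[3] - b[1])
        return inter / (area_a + area_b - inter + 1e-9)

    def associate(dets, det_embs, tracks, iou_thr=0.5, sim_thr=0.7):
        # Stage 1: tracking-by-detection via IoU (visible, slow objects).
        # Stage 2: re-id similarity for the rest, robust to occlusion and
        # to the large displacements of low frame-rate video.
        # tracks: list of {'box': (x1, y1, x2, y2), 'emb': unit-norm vector}
        matches, unmatched = [], []
        for det, emb in zip(dets, det_embs):
            score, best = max(((iou(det, t['box']), i)
                               for i, t in enumerate(tracks)),
                              default=(0.0, -1))
            if score >= iou_thr:
                matches.append((det, best))
                continue
            sim, best = max(((float(np.dot(emb, t['emb'])), i)
                             for i, t in enumerate(tracks)),
                            default=(-1.0, -1))
            if sim >= sim_thr:
                matches.append((det, best))
            else:
                unmatched.append(det)
        return matches, unmatched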
Compositional Semantic Mix for Domain Adaptation in Point Cloud Segmentation
Deep-learning models for 3D point cloud semantic segmentation exhibit limited
generalization capabilities when trained and tested on data captured with
different sensors or in varying environments due to domain shift. Domain
adaptation methods can be employed to mitigate this domain shift, for instance,
by simulating sensor noise, developing domain-agnostic generators, or training
point cloud completion networks. Often, these methods are tailored for range
view maps or necessitate multi-modal input. In contrast, domain adaptation in
the image domain can be executed through sample mixing, which emphasizes input
data manipulation rather than employing distinct adaptation modules. In this
study, we introduce compositional semantic mixing for point cloud domain
adaptation, representing the first unsupervised domain adaptation technique for
point cloud segmentation based on semantic and geometric sample mixing. We
present a two-branch symmetric network architecture capable of concurrently
processing point clouds from a source domain (e.g. synthetic) and point clouds
from a target domain (e.g. real-world). Each branch operates within one domain
by integrating selected data fragments from the other domain and utilizing
semantic information derived from source labels and target (pseudo) labels.
Additionally, our method can leverage a limited number of human point-level
annotations (semi-supervised) to further enhance performance. We assess our
approach in both synthetic-to-real and real-to-real scenarios using LiDAR
datasets and demonstrate that it significantly outperforms state-of-the-art
methods in both unsupervised and semi-supervised settings.
Comment: TPAMI. arXiv admin note: text overlap with arXiv:2207.0977
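The core mixing operation can be sketched very compactly (the full compositional procedure additionally involves, e.g., local augmentations, target pseudo-labels and the two-branch symmetric training described above):

    import numpy as np

    def semantic_mix(pts_src, lbl_src, pts_tgt, lbl_tgt, classes):
        # pts_*: (N, 3) xyz coordinates; lbl_*: (N,) per-point class ids.
        # Points of the selected semantic classes are taken from the source
        # cloud and composed into the target cloud, labels included.
        mask = np.isin(lbl_src, classes)
        mixed_pts = np.concatenate([pts_tgt, pts_src[mask]], axis=0)
        mixed_lbl = np.concatenate([lbl_tgt, lbl_src[mask]], axis=0)
        return mixed_pts, mixed_lbl

Applied symmetrically, the same operation mixes source fragments (with ground-truth labels) into the target cloud and target fragments (with pseudo-labels) into the source cloud.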
HYperbolic Self-Paced Learning for Self-Supervised Skeleton-based Action Representations
Self-paced learning has been beneficial for tasks where some initial
knowledge is available, such as weakly supervised learning and domain
adaptation, to select and order the training sample sequence, from easy to
complex. However, its applicability remains unexplored in unsupervised
learning, where the knowledge of the task matures during training. We propose a novel
HYperbolic Self-Paced model (HYSP) for learning skeleton-based action
representations. HYSP adopts self-supervision: it uses data augmentations to
generate two views of the same sample, and it learns by matching one (named
online) to the other (the target). We propose to use hyperbolic uncertainty to
determine the algorithmic learning pace, under the assumption that less
uncertain samples should be more strongly driving the training, with a larger
weight and pace. Hyperbolic uncertainty is a by-product of the adopted
hyperbolic neural networks; it matures during training and comes at no
extra cost compared to established Euclidean SSL framework counterparts.
When tested on three established skeleton-based action recognition datasets,
HYSP outperforms the state-of-the-art on PKU-MMD I, as well as on 2 out of 3
downstream tasks on NTU-60 and NTU-120. Additionally, HYSP only uses positive
pairs, and therefore bypasses the complex and computationally demanding mining
procedures required for negatives in contrastive techniques. Code is
available at https://github.com/paolomandica/HYSP.
Comment: Accepted at ICLR 2023
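As we read the abstract, the self-pacing can be sketched as follows: the norm of an embedding in the Poincaré ball serves as a certainty score that weights each sample's contribution to the loss. The mapping from certainty to weight is an illustrative choice, not the paper's exact schedule.

    import torch

    def hyperbolic_pacing_weights(z, eps=1e-5):
        # z: (B, D) embeddings inside the unit Poincare ball (||z|| < 1);
        # a norm close to 1 means a confident, low-uncertainty sample,
        # which should drive training more strongly. Detached, since the
        # pacing weight itself should not be backpropagated.
        certainty = z.norm(dim=-1).detach().clamp(max=1.0 - eps)
        return certainty / certainty.mean()   # per-sample loss weights

    # usage sketch:
    # loss = (hyperbolic_pacing_weights(z_online) * per_sample_loss).mean()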